🛠️ All DevTools
Showing 1801–1820 of 4370 tools
Last Updated
April 28, 2026 at 12:01 AM
SWI-Prolog 10.0.0 Released
Hacker News (score: 18)[Other] SWI-Prolog 10.0.0 Released
FastShortcuts
Product Hunt[Other] Master 500+ keyboard shortcuts for developers. FastShortcuts is a free, comprehensive database of 500+ keyboard shortcuts for developers. Find shortcuts for VS Code, Windows, Mac, IntelliJ, Chrome & more. Download PDFs, search instantly, and boost your productivity by 10x. 100% free forever.
Uncloud - Tool for deploying containerised apps across servers without k8s
Hacker News (score: 24)[DevOps] Uncloud - Tool for deploying containerised apps across servers without k8s
Show HN: RAG in 3 Lines of Python
Show HN (score: 7)[Other] Show HN: RAG in 3 Lines of Python Got tired of wiring up vector stores, embedding models, and chunking logic every time I needed RAG. So I built piragi.

    from piragi import Ragi
    kb = Ragi(["./docs", "./code/**/*.py", "https://api.example.com/docs"])
    answer = kb.ask("How do I deploy this?")

That's the entire setup. No API keys required: it runs locally on Ollama + sentence-transformers.

What it does:

    - All formats: PDF, Word, Excel, Markdown, code, URLs, images, audio
    - Auto-updates: watches sources, refreshes in the background, zero query latency
    - Citations: every answer includes its sources
    - Advanced retrieval: HyDE, hybrid search (BM25 + vector), cross-encoder reranking
    - Smart chunking: semantic, contextual, and hierarchical strategies
    - OpenAI-compatible: swap in GPT/Claude whenever you want

Quick examples:

    # Filter by metadata
    answer = kb.filter(file_type="pdf").ask("What's in the contracts?")

    # Enable advanced retrieval
    kb = Ragi("./docs", config={
        "retrieval": {
            "use_hyde": True,
            "use_hybrid_search": True,
            "use_cross_encoder": True
        }
    })

    # Use OpenAI instead
    kb = Ragi("./docs", config={"llm": {"model": "gpt-4o-mini", "api_key": "sk-..."}})

Install:

    pip install piragi

PyPI: https://pypi.org/project/piragi/

Would love feedback. What's missing? What would make this actually useful for your projects?
Show HN: Stanford's ACE paper was just open sourced
Show HN (score: 5)[Other] Show HN: Stanford's ACE paper was just open sourced Last month, the SambaNova team, in partnership with Stanford and UC Berkeley, introduced the viral paper Agentic Context Engineering (ACE), a framework for building evolving contexts that enable self-improving language models and agents. Today, the team has released the full ACE implementation, available on GitHub, including the complete system architecture, modular components (Generator, Reflector, Curator), and ready-to-run scripts for both Finance and AppWorld benchmarks. The repository provides everything needed to reproduce results, extend to new domains, and experiment with evolving playbooks in your own applications.
Show HN: Microlandia, a brutally honest city builder
Show HN (score: 116)[Other] Show HN: Microlandia, a brutally honest city builder It all started as an experiment to see if I could build a game making heavy use of Deno and its SQLite driver. After sharing an early build in the "What are you working on?" thread here, I got the encouragement I needed to polish it and make a version 1.0 for Steam.

So here it is: Microlandia, a SimCity Classic-inspired game with parameters drawn from real-life datasets, statistics, and research. It also surfaces aspects that are conveniently hidden in other games (like homelessness), and my plan is to keep updating, expanding, and refining the models for an indefinite amount of time.
Show HN: MCP Gateway: Unifying Access to MCP Servers Without N×M Integrations
Show HN (score: 9)[Other] Show HN: MCP Gateway: Unifying Access to MCP Servers Without N×M Integrations Many teams connecting LLMs to external tools eventually hit the same architectural issue: as more tools and agents are added, the integration pattern becomes an N×M mesh of direct connections. Each agent implements its own auth, retries, rate limiting, and logging; each tool's credentials are distributed to multiple places; and observability becomes fragmented.

We built this gateway to provide a single place to manage authentication, authorization, routing, and observability for MCP servers, with a path toward a more general agent-gateway architecture in the future.

The system includes a central MCP registry, support for OAuth2/DCR integration, Virtual MCP Servers for curated toolsets, and a playground for experimenting with tool calls.

Resources:

Architecture blog (covers the N×M problem, gateway motivation, design choices, auth layers, Virtual MCP Servers, and the overall model): https://www.truefoundry.com/blog/introducing-truefoundry-mcp-gateway

Tutorial (step-by-step guide to writing an MCP server, adding Okta-based OAuth, and integrating it with the gateway): https://docs.truefoundry.com/docs/ai-gateway/mcp-server-oauth-okta

Feedback on gaps and edge cases is welcome: https://www.truefoundry.com/mcp-gateway
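The N×M mesh described above can be made concrete with a little arithmetic: with N agents and M tools, direct wiring needs N·M integrations, while routing through a gateway needs only N+M. A tiny sketch (numbers are illustrative, not from the post):

```python
def direct_mesh(n_agents: int, m_tools: int) -> int:
    # Every agent wires its own auth/retries/logging to every tool.
    return n_agents * m_tools

def via_gateway(n_agents: int, m_tools: int) -> int:
    # Each agent and each tool integrates once, with the gateway.
    return n_agents + m_tools

# e.g. 10 agents and 20 tools:
# 200 direct integrations vs. 30 integrations via a gateway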
Building a Toast Component
Hacker News (score: 12)[Other] Building a Toast Component
Show HN: Fresh: A new terminal editor built in Rust
Hacker News (score: 40)[IDE/Editor] Show HN: Fresh: A new terminal editor built in Rust I built Fresh to challenge the assumption that terminal editing must require a steep learning curve or endless configuration. My goal was to create a fast, resource-efficient TUI editor with the usability and features of a modern GUI editor (like a command palette, mouse support, and LSP integration).

Core philosophy:

- Ease of use: fundamentally non-modal. Prioritizes standard keybindings and a minimal learning curve.
- Efficiency: uses a lazy-loading piece tree to avoid loading huge files into RAM; reads only what's needed for user interactions. Written in Rust.
- Extensibility: uses TypeScript (via Deno) for plugins, making it accessible to a large developer base.

The performance challenge:

I focused on resource consumption and speed, with large-file support as a core feature. A quick benchmark loading a 2 GB log file with ANSI color codes, compared against other popular editors:

    - Fresh:   load time ~600 ms  | memory ~36 MB
    - Neovim:  load time ~6.5 s   | memory ~2 GB
    - Emacs:   load time ~10 s    | memory ~2 GB
    - VS Code: load time ~20 s    | memory: OOM killed (~4.3 GB available)

(Only Fresh rendered the ANSI colors.)

Development process:

I embraced Claude Code and made an effort to get good mileage out of it. I gave it strong, specific directions, especially in architecture, code structure, and UX-sensitive areas. It required constant supervision and re-alignment, especially in performance-critical areas. I added very extensive tests (compared to my normal standards) to keep it aligned as the code grows, focusing in particular on end-to-end tests where I could easily enforce a specific behavior or user flow.

Fresh is an open-source project (GPL-2) seeking early adopters. You're welcome to send feedback, feature requests, and bug reports.

Website: https://sinelaw.github.io/fresh/
GitHub repository: https://github.com/sinelaw/fresh
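The piece-tree idea behind Fresh's lazy loading can be sketched with its simpler cousin, a piece table: the original file contents are never copied, edits only append to a separate buffer and split "pieces" that point into the two buffers. A minimal illustration in Python (not Fresh's actual Rust implementation; names are made up):

```python
class PieceTable:
    """Minimal piece table: the original buffer is never mutated;
    inserts append to an add buffer and split existing pieces."""

    def __init__(self, original: str):
        self.original = original
        self.add = ""
        # Each piece: (buffer_name, start, length)
        self.pieces = [("orig", 0, len(original))] if original else []

    def text(self) -> str:
        bufs = {"orig": self.original, "add": self.add}
        return "".join(bufs[b][s:s + n] for b, s, n in self.pieces)

    def insert(self, pos: int, s: str) -> None:
        start = len(self.add)
        self.add += s
        new_piece = ("add", start, len(s))
        offset = 0
        for i, (b, st, n) in enumerate(self.pieces):
            if offset + n >= pos:
                split = pos - offset
                replacement = []
                if split > 0:
                    replacement.append((b, st, split))
                replacement.append(new_piece)
                if split < n:
                    replacement.append((b, st + split, n - split))
                self.pieces[i:i + 1] = replacement
                return
            offset += n
        self.pieces.append(new_piece)
```

A real editor turns the piece list into a tree for O(log n) edits, and a lazy-loading variant keeps "orig" pieces as file offsets so only the visible ranges are ever read from disk.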
Show HN: The Taka Programming Language
Show HN (score: 9)[Other] Show HN: The Taka Programming Language Hi HN! I created a small stack-based programming language, which I'm using to solve Advent of Code problems. I think the forward Polish notation works pretty nicely.
Monostate AItraining
Product Hunt[CLI Tool] Fine-tuning, RL, and inference in one CLI. Fine-tune LLMs and ML models with automatic dataset conversion, hyperparameter sweeps, and custom RL environments (monostate/aitraining).
beLow
Product Hunt[Other] Inline insights for C/C++ showing CPU, memory, and energy. beLow automatically analyzes your C and C++ embedded code to identify performance bottlenecks and generate optimized code tailored to your target hardware. Slash execution time, reduce energy consumption, and accelerate time to market. Designed for developers building in automotive, aerospace, robotics, and other performance-critical systems, beLow simplifies the complex work of embedded code optimization so teams can focus on innovation, not fine-tuning.
Show HN: Golang Client Library for Gradium.ai TTS/STT API
Show HN (score: 5)[API/SDK] Show HN: Golang Client Library for Gradium.ai TTS/STT API
Client-side GPU load balancing with Redis and Lua
Hacker News (score: 29)[Other] Client-side GPU load balancing with Redis and Lua
Detecting AV1-encoded videos with Python
Hacker News (score: 14)[Other] Detecting AV1-encoded videos with Python
Show HN: RunMat: runtime with auto CPU/GPU routing for dense math
Hacker News (score: 17)[Other] Show HN: RunMat: runtime with auto CPU/GPU routing for dense math Hi, I'm Nabeel. In August I released RunMat as an open-source runtime for MATLAB code that was already much faster than GNU Octave on the workloads I tried: https://news.ycombinator.com/item?id=44972919

Since then, I've taken it further with RunMat Accelerate: the runtime now automatically fuses operations and routes work between CPU and GPU. You write MATLAB-style code, and RunMat runs your computation across CPUs and GPUs for speed. No CUDA, no kernel code.

Under the hood, it builds a graph of your array math, fuses long chains into a few kernels, keeps data on the GPU when that helps, and falls back to CPU JIT / BLAS for small cases.

On an Apple M2 Max (32 GB), here are some current benchmarks (median of several runs):

    5M-path Monte Carlo
      - RunMat:  0.61 s
      - PyTorch: 1.70 s
      - NumPy:   79.9 s
      → ~2.8× faster than PyTorch and ~130× faster than NumPy on this test.

    64 × 4K image preprocessing pipeline (mean/std, normalize, gain/bias, gamma, MSE)
      - RunMat:  0.68 s
      - PyTorch: 1.20 s
      - NumPy:   7.0 s
      → ~1.8× faster than PyTorch and ~10× faster than NumPy.

    1B-point elementwise chain (sin / exp / cos / tanh mix)
      - RunMat:  0.14 s
      - PyTorch: 20.8 s
      - NumPy:   11.9 s
      → ~140× faster than PyTorch and ~80× faster than NumPy.

If you want more detail on how the fusion and CPU/GPU routing work, I wrote up a longer post here: https://runmat.org/blog/runmat-accel-intro-blog

You can run the same benchmarks yourself from the GitHub repo in the main HN link. Feedback, bug reports, and "here's where it breaks or is slow" examples are very welcome.
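The fusion idea described above (collapsing a chain of elementwise operations into one pass over the data, instead of materializing an intermediate array per operation) can be sketched in plain Python. This illustrates the general technique only, not RunMat's actual kernel generation:

```python
import math

def fuse(*ops):
    """Compose elementwise ops into a single 'kernel' so the data
    is traversed once, with no intermediate arrays."""
    def kernel(xs):
        out = []
        for x in xs:          # one pass over the data
            for op in ops:    # apply the whole chain per element
                x = op(x)
            out.append(x)
        return out
    return kernel

# Unfused version: three full passes and two temporary arrays.
def unfused(xs):
    a = [math.sin(x) for x in xs]
    b = [math.exp(x) for x in a]
    return [math.tanh(x) for x in b]

fused = fuse(math.sin, math.exp, math.tanh)
```

On a GPU the payoff is larger still: each unfused pass is a separate kernel launch and a round trip through memory, whereas the fused chain stays in registers.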
Show HN: Marmot: Single-binary data catalog (no Kafka, no Elasticsearch)
Hacker News (score: 24)[Other] Show HN: Marmot: Single-binary data catalog (no Kafka, no Elasticsearch)
Nixtml: Static website and blog generator written in Nix
Hacker News (score: 21)[Other] Nixtml: Static website and blog generator written in Nix
A deep dive into QEMU: The Tiny Code Generator (TCG), part 1 (2021)
Hacker News (score: 50)[Other] A deep dive into QEMU: The Tiny Code Generator (TCG), part 1 (2021)
Vibecode DB
Product Hunt[API/SDK] The Frontend Database API Gateway vibecode-db rethinks how frontend apps talk to the backend. Define your schema once and query any backend (SQLite, Supabase, PostgreSQL, REST APIs) using one unified, type-safe interface. No rewrites, no SDK switching, no lock-in. Swap databases anytime, build offline-first apps, and integrate any backend through CustomAdapter. Write once. Use anywhere.
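The "swap backends behind one interface" design the blurb describes is the classic adapter pattern. A minimal sketch in Python (illustrative only; vibecode-db itself is a TypeScript library and its real API will differ):

```python
from abc import ABC, abstractmethod

class BackendAdapter(ABC):
    """One query interface; each backend implements it."""
    @abstractmethod
    def find(self, table: str, **filters):
        ...

class InMemoryAdapter(BackendAdapter):
    """Toy backend: rows are plain dicts held in memory."""
    def __init__(self, data):
        self.data = data  # {table_name: [row dicts]}

    def find(self, table, **filters):
        return [row for row in self.data.get(table, [])
                if all(row.get(k) == v for k, v in filters.items())]

class DB:
    """Frontend-facing API: swap the adapter, keep the queries."""
    def __init__(self, adapter: BackendAdapter):
        self.adapter = adapter

    def query(self, table, **filters):
        return self.adapter.find(table, **filters)
```

Swapping SQLite for a REST backend then means writing one new adapter class; every call site using `DB.query` stays unchanged, which is the "no rewrites, no lock-in" claim in practice.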